DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Currently, a large team of volunteers manually screens each submission before it is approved for posting on the DonorsChoose.org website.
Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, they need a way to scale and partially automate this screening process.
The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval.
The train.csv data set provided by DonorsChoose contains the following features:
| Feature | Description |
|---|---|
| project_id | A unique identifier for the proposed project. Example: p036502 |
| project_title | Title of the project. |
| project_grade_category | Grade level of students for which the project is targeted (one of a fixed set of enumerated values). |
| project_subject_categories | One or more (comma-separated) subject categories for the project, drawn from a fixed list of enumerated values. |
| school_state | State where the school is located (two-letter U.S. postal code). Example: WY |
| project_subject_subcategories | One or more (comma-separated) subject subcategories for the project. |
| project_resource_summary | An explanation of the resources needed for the project. |
| project_essay_1 | First application essay* |
| project_essay_2 | Second application essay* |
| project_essay_3 | Third application essay* |
| project_essay_4 | Fourth application essay* |
| project_submitted_datetime | Datetime when the project application was submitted. Example: 2016-04-28 12:43:56.245 |
| teacher_id | A unique identifier for the teacher of the proposed project. Example: bdf8baa8fedef6bfeec7ae4ff1c15c56 |
| teacher_prefix | Teacher's title (one of a fixed set of enumerated values). |
| teacher_number_of_previously_posted_projects | Number of project applications previously submitted by the same teacher. Example: 2 |
* See the section Notes on the Essay Data for more details about these features.
Additionally, the resources.csv data set provides more data about the resources required for each project. Each line in this file represents a resource required by a project:
| Feature | Description |
|---|---|
| id | A project_id value from the train.csv file. Example: p036502 |
| description | Description of the resource. Example: Tenor Saxophone Reeds, Box of 25 |
| quantity | Quantity of the resource required. Example: 3 |
| price | Price of the resource required. Example: 9.95 |
Note: Many projects require multiple resources. The id value corresponds to a project_id in train.csv, so it can be used as a key to retrieve all resources needed for a project, as sketched below.
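A minimal sketch of that lookup, along with the per-project aggregation used near the end of this notebook (column names are taken from the table above):

```python
import pandas as pd

resource_data = pd.read_csv('resources.csv')

# all resources requested by a single project
print(resource_data[resource_data['id'] == 'p036502'])

# per-project totals; this same aggregation is applied before modeling below
price_data = resource_data.groupby('id').agg({'price': 'sum', 'quantity': 'sum'}).reset_index()
print(price_data.head())
```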
The data set contains the following label (the value you will attempt to predict):
| Label | Description |
|---|---|
| project_is_approved | A binary flag indicating whether DonorsChoose approved the project. A value of 0 indicates the project was not approved; a value of 1 indicates it was approved. |
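The two classes need not be balanced, so it is worth checking the label split before modeling. A minimal sketch, assuming train_data.csv is available locally:

```python
import pandas as pd

project_data = pd.read_csv('train_data.csv')
# fraction of approved (1) vs. not-approved (0) projects
print(project_data['project_is_approved'].value_counts(normalize=True))
```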
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")

import os
import pickle
import re  # tutorial on Python regular expressions: https://pymotw.com/2/re/
import sqlite3
import string
from collections import Counter

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm

import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn import metrics
from sklearn.metrics import confusion_matrix, roc_curve, auc

from gensim.models import Word2Vec, KeyedVectors

import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
project_data = pd.read_csv('train_data.csv')
resource_data = pd.read_csv('resources.csv')
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
labels = project_data['project_is_approved']
project_data.drop(['project_is_approved'], axis=1, inplace=True)
# use only the first 50,000 rows to keep preprocessing time manageable
labels = labels.head(50000)
project_data = project_data[0:50000]
project_data.head(1)
from sklearn.model_selection import train_test_split
project_data_train, project_data_test, labels_train, labels_test = train_test_split(project_data, labels , test_size=0.33, stratify=labels)
print(project_data_train.shape)
print(project_data_test.shape)
print(labels_train.shape)
print(labels_test.shape)
project_data_train.head(2)
labels=list(labels_train)
ids=list(project_data_train['id'])
data={'labels':labels, 'id':ids}
df=pd.DataFrame(data)
print(df.head(2))
project_data_train = pd.merge(project_data_train, df, on='id', how='left').reset_index()
project_data_train.head(2)
project_data_train.drop(['Unnamed: 0','index'],axis=1,inplace=True)
project_data_train.head(2)
project_subject_categories - Train Data

catogories = project_data_train['project_subject_categories'].values
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
cat_list = []
for i in catogories:
    temp = ""
    # a value like "Math & Science, Warmth, Care & Hunger" splits into
    # ["Math & Science", " Warmth", " Care & Hunger"]
    for j in i.split(','):
        if 'The' in j.split():  # drop the standalone word "The" (e.g. "Music & The Arts")
            j = j.replace('The', '')
        j = j.replace(' ', '')  # remove all spaces: "Math & Science" => "Math&Science"
        temp += j.strip() + " "  # accumulate the cleaned names, space-separated
    temp = temp.replace('&', '_')  # replace '&' with '_': "Math&Science" => "Math_Science"
    cat_list.append(temp.strip())
project_data_train['clean_categories'] = cat_list
project_data_train.drop(['project_subject_categories'], axis=1, inplace=True)
unique_list = []
for x in cat_list:
if x not in unique_list:
unique_list.append(x)
#print(unique_list)
categories=pd.DataFrame({'clean_categories': unique_list})
categories=categories.sort_values(['clean_categories'], ascending=True).reset_index()
print(categories.head(2))
df1=project_data_train[['clean_categories','labels']][(project_data_train['labels']==1)]
print(df1.head(2))
df2=project_data_train[['clean_categories','labels']][(project_data_train['labels']==0)]
z =df1.groupby(['clean_categories'])['labels'].value_counts() /project_data_train.groupby(['clean_categories'])['labels'].count()
group_1=pd.DataFrame(z)
group_1=group_1.reset_index(drop=True)
print(group_1.head(2))
z1 =df2.groupby(['clean_categories'])['labels'].value_counts() /project_data_train.groupby(['clean_categories'])['labels'].count()
group_0=pd.DataFrame(z1)
group_0=group_0.reset_index(drop=True)
print(group_0.head(2))
x1= df1.groupby(['clean_categories'])['labels'].value_counts()
class_1=pd.DataFrame(x1)
class_1=class_1.reset_index(drop=True)
print ( class_1.head(2))
x0= df2.groupby(['clean_categories'])['labels'].value_counts()
class_0=pd.DataFrame(x0)
class_0=class_0.reset_index(drop=True)
print ( class_0.head(2))
Response_Table = pd.concat([categories, class_0, class_1],axis=1)
#taken from https://stackoverflow.com/questions/24685012/pandas-dataframe-renaming-multiple-identically-named-columns
def df_column_uniquify(df):
df_columns = df.columns
new_columns = []
for item in df_columns:
counter = 0
newitem = item
while newitem in new_columns:
counter += 1
newitem = "{}_{}".format(item, counter)
new_columns.append(newitem)
df.columns = new_columns
return df
Response_Table = df_column_uniquify(Response_Table)
Response_Table.rename(columns={'labels':'Class=0','labels_1':'Class=1'},inplace=True)
print("Response Table for Categories")
Response_Table
category_1 = pd.concat([categories,group_0,group_1],axis=1).reset_index()
category_1
category_1.drop(['level_0','index'],axis=1,inplace=True)
category_1.head(2)
category_1 = df_column_uniquify(category_1)
category_1.head(2)
category_1.rename(columns={'labels':'Category_0','labels_1':'Category_1'},inplace=True)
category_1["Category_0"].fillna( method ='ffill', inplace = True)
category_1["Category_1"].fillna( method ='ffill', inplace = True)
project_data_train = pd.merge(project_data_train, category_1, on='clean_categories', how='left').reset_index()
project_data_train.drop(['index','clean_categories'],axis=1, inplace=True)
project_data_train.head(1)
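The block above implements response coding: each category value is replaced by the empirical class rates observed for it in the training data (Category_0 is the fraction of class-0 rows, Category_1 the fraction of class-1 rows). A toy sketch of the same idea, with hypothetical data:

```python
import pandas as pd

# hypothetical toy frame: two category values, binary labels
toy = pd.DataFrame({'cat': ['A', 'A', 'B', 'B', 'B'],
                    'labels': [1, 0, 1, 1, 1]})

# P(label | cat): each row holds the class-0 and class-1 rates that
# response coding substitutes for the raw category value
rates = toy.groupby('cat')['labels'].value_counts(normalize=True).unstack(fill_value=0)
print(rates)
```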
project_subject_categories - Test Data

catogories = list(project_data_test['project_subject_categories'].values)
# same cleaning as for the train categories above
cat_list = []
for i in catogories:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():  # drop the standalone word "The"
            j = j.replace('The', '')
        j = j.replace(' ', '')  # remove all spaces
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    cat_list.append(temp.strip())
project_data_test['clean_categories'] = cat_list
project_data_test.drop(['project_subject_categories'], axis=1, inplace=True)
unique_list_test = []
for x in cat_list:
if x not in unique_list_test:
unique_list_test.append(x)
#https://stackoverflow.com/questions/41125909/python-find-elements-in-one-list-that-are-not-in-the-other
difference=list(set(unique_list_test).difference(unique_list))
print(difference)
df1=pd.DataFrame([['Music_Arts Warmth Care_Hunger',0.5,0.5]],columns=['clean_categories','Category_0','Category_1'])
category_1=category_1.append(df1, ignore_index = True)
project_data_test = pd.merge(project_data_test, category_1, on='clean_categories', how='left').reset_index()
project_data_test.drop(['clean_categories','Unnamed: 0'],axis=1, inplace=True)
project_subject_subcategories - Train Data

sub_catogories = list(project_data_train['project_subject_subcategories'].values)
# same cleaning as for the categories above
sub_cat_list = []
for i in sub_catogories:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():  # drop the standalone word "The"
            j = j.replace('The', '')
        j = j.replace(' ', '')  # remove all spaces
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    sub_cat_list.append(temp.strip())
project_data_train['clean_subcategories'] = sub_cat_list
project_data_train.drop(['project_subject_subcategories'], axis=1, inplace=True)
unique_list = []
for x in sub_cat_list:
if x not in unique_list:
unique_list.append(x)
categories=pd.DataFrame({'clean_subcategories': unique_list})
categories=categories.sort_values(['clean_subcategories'], ascending=True).reset_index()
df1=project_data_train[['clean_subcategories','labels']][(project_data_train['labels']==1)]
df2=project_data_train[['clean_subcategories','labels']][(project_data_train['labels']==0)]
z =df1.groupby(['clean_subcategories'])['labels'].value_counts() /project_data_train.groupby(['clean_subcategories'])['labels'].count()
group_1=pd.DataFrame(z)
group_1=group_1.reset_index(drop=True)
z1 =df2.groupby(['clean_subcategories'])['labels'].value_counts() /project_data_train.groupby(['clean_subcategories'])['labels'].count()
group_0=pd.DataFrame(z1)
group_0=group_0.reset_index(drop=True)
x1= df1.groupby(['clean_subcategories'])['labels'].value_counts()
class_1=pd.DataFrame(x1)
class_1=class_1.reset_index(drop=True)
x0= df2.groupby(['clean_subcategories'])['labels'].value_counts()
class_0=pd.DataFrame(x0)
class_0=class_0.reset_index(drop=True)
Response_Table = pd.concat([categories, class_0, class_1],axis=1)
Response_Table = df_column_uniquify(Response_Table)
Response_Table.rename(columns={'labels':'Class=0','labels_1':'Class=1'},inplace=True)
print("Response Table for Sub-Categories")
Response_Table
category_1 = pd.concat([categories,group_0,group_1],axis=1).reset_index()
category_1.head(2)
category_1.drop(['index'],axis=1,inplace=True)
category_1.head(2)
category_1 = df_column_uniquify(category_1)
category_1.head(2)
category_1.rename(columns={'labels':'SubCategory_0','labels_1':'SubCategory_1'},inplace=True)
category_1.head(2)
category_1["SubCategory_0"].fillna( method ='ffill', inplace = True)
category_1["SubCategory_1"].fillna( method ='ffill', inplace = True)
project_data_train = pd.merge(project_data_train, category_1, on='clean_subcategories', how='left').reset_index()
project_data_train.drop(['clean_subcategories'],axis=1, inplace=True)
project_subject_subcategories - Test Data

sub_catogories = list(project_data_test['project_subject_subcategories'].values)
# same cleaning as for the train subcategories above
sub_cat_list = []
for i in sub_catogories:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():  # drop the standalone word "The"
            j = j.replace('The', '')
        j = j.replace(' ', '')  # remove all spaces
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    sub_cat_list.append(temp.strip())
project_data_test['clean_subcategories'] = sub_cat_list
project_data_test.drop(['project_subject_subcategories'], axis=1, inplace=True)
unique_list_test = []
for x in sub_cat_list:
if x not in unique_list_test:
unique_list_test.append(x)
#https://stackoverflow.com/questions/41125909/python-find-elements-in-one-list-that-are-not-in-the-other
difference=list(set(unique_list_test).difference(unique_list))
print(difference)
df1=pd.DataFrame([['Other Warmth Care_Hunger',0.5,0.5],['Civics_Government Extracurricular',0.5,0.5],['CommunityService NutritionEducation',0.5,0.5],['Gym_Fitness SocialSciences',0.5,0.5],['ParentInvolvement Warmth Care_Hunger',0.5,0.5],['CommunityService Economics',0.5,0.5],['Mathematics Warmth Care_Hunger',0.5,0.5],['FinancialLiteracy VisualArts',0.5,0.5],['ESL Gym_Fitness',0.5,0.5],['ForeignLanguages PerformingArts',0.5,0.5],['College_CareerPrep Warmth Care_Hunger',0.5,0.5],['VisualArts Warmth Care_Hunger',0.5,0.5],['CharacterEducation Warmth Care_Hunger',0.5,0.5],['CommunityService EarlyDevelopment',0.5,0.5],['ParentInvolvement TeamSports',0.5,0.5]],columns=['clean_subcategories','SubCategory_0','SubCategory_1'])
category_1=category_1.append(df1, ignore_index = True)
category_1.drop(['level_0'],axis=1,inplace=True)
category_1.head(1)
project_data_test = pd.merge(project_data_test, category_1, on='clean_subcategories', how='left')
# merge the four essay columns into a single text column
project_data_train["essay"] = project_data_train["project_essay_1"].map(str) + " " + \
                              project_data_train["project_essay_2"].map(str) + " " + \
                              project_data_train["project_essay_3"].map(str) + " " + \
                              project_data_train["project_essay_4"].map(str)
# print a sample merged essay
print(project_data_train['essay'].values[0])
print("="*50)
# https://stackoverflow.com/a/47091490/4084039
def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can\'t", "can not", phrase)
    # general
    phrase = re.sub(r"n\'t", " not", phrase)
    phrase = re.sub(r"\'re", " are", phrase)
    phrase = re.sub(r"\'s", " is", phrase)
    phrase = re.sub(r"\'d", " would", phrase)
    phrase = re.sub(r"\'ll", " will", phrase)
    phrase = re.sub(r"\'t", " not", phrase)
    phrase = re.sub(r"\'ve", " have", phrase)
    phrase = re.sub(r"\'m", " am", phrase)
    return phrase
sent = decontracted(project_data_train['essay'].values[0])
print(sent)
print("="*50)
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
# remove special characters: https://stackoverflow.com/a/5843547/4084039
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
print(sent)
# https://gist.github.com/sebleier/554280
# we are removing the words from the stop words list: 'no', 'nor', 'not'
stopwords= ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"]
# Combining all the above steps
preprocessed_essays = []
# tqdm prints a progress bar
for sentence in tqdm(project_data_train['essay'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_essays.append(sent.lower().strip())
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
my_counter = Counter()
for word in preprocessed_essays:
my_counter.update(word.split())
essay_dict = dict(my_counter)
sorted_essays_dict = dict(sorted(essay_dict.items(), key=lambda kv: kv[1]))
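As a quick sanity check on the cleaned corpus, Counter.most_common reads off the top tokens directly (a minimal sketch using the counter built above):

```python
# ten most frequent tokens in the preprocessed essays
print(my_counter.most_common(10))
```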
preprocessed_essays[0]
# merge the four essay columns into a single text column
project_data_test["essay"] = project_data_test["project_essay_1"].map(str) + " " + \
                             project_data_test["project_essay_2"].map(str) + " " + \
                             project_data_test["project_essay_3"].map(str) + " " + \
                             project_data_test["project_essay_4"].map(str)
# printing some random essays.
print(project_data_test['essay'].values[0])
# reuse the decontracted() helper defined above
sent_test = decontracted(project_data_test['essay'].values[0])
print(sent_test)
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent_test = sent_test.replace('\\r', ' ')
sent_test = sent_test.replace('\\"', ' ')
sent_test = sent_test.replace('\\n', ' ')
print(sent_test)
# remove special characters: https://stackoverflow.com/a/5843547/4084039
sent_test = re.sub('[^A-Za-z0-9]+', ' ', sent_test)
print(sent_test)
# https://gist.github.com/sebleier/554280
# reuse the stopwords list defined above (standard stop words minus 'no', 'nor', 'not')
# Combining all the above steps
preprocessed_essays_test = []
# tqdm prints a progress bar
for sentence in tqdm(project_data_test['essay'].values):
    sent_cv = decontracted(sentence)
    sent_cv = sent_cv.replace('\\r', ' ')
    sent_cv = sent_cv.replace('\\"', ' ')
    sent_cv = sent_cv.replace('\\n', ' ')
    sent_cv = re.sub('[^A-Za-z0-9]+', ' ', sent_cv)
    # https://gist.github.com/sebleier/554280
    # lower-case before the stopword check, matching the train pipeline
    sent_cv = ' '.join(e for e in sent_cv.split() if e.lower() not in stopwords)
    preprocessed_essays_test.append(sent_cv.lower().strip())
# after preprocessing
preprocessed_essays_test[0]
# printing some random title.
print(project_data_train['project_title'].values[0])
print("="*50)
sent = decontracted(project_data_train['project_title'].values[0])
print(sent)
print("="*50)
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
# Combining all the above steps
preprocessed_title = []
# tqdm prints a progress bar
for sentence in tqdm(project_data_train['project_title'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e not in stopwords)
    preprocessed_title.append(sent.lower().strip())
my_counter = Counter()
for word in preprocessed_title:
my_counter.update(word.split())
title_dict = dict(my_counter)
sorted_title_dict = dict(sorted(title_dict.items(), key=lambda kv: kv[1]))
# after preprocessing
preprocessed_title[0]
# printing some random title.
print(project_data_test['project_title'].values[0])
print("="*50)
# Combining all the above steps
preprocessed_title_test = []
# tqdm prints a progress bar
for sentence in tqdm(project_data_test['project_title'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e not in stopwords)
    preprocessed_title_test.append(sent.lower().strip())
# after preprocessing
preprocessed_title_test[0]
project_data_train.columns
project_data_test.columns
state = list(project_data_train['school_state'].values)
# same cleaning as for the categories above
state_list = []
for i in state:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():  # drop the standalone word "The"
            j = j.replace('The', '')
        j = j.replace(' ', '')  # remove all spaces
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    state_list.append(temp.strip())
project_data_train['clean_state'] = state_list
project_data_train.drop(['school_state'], axis=1, inplace=True)
unique_list = []
for x in state_list:
if x not in unique_list:
unique_list.append(x)
categories=pd.DataFrame({'clean_state': unique_list})
categories=categories.sort_values(['clean_state'], ascending=True).reset_index()
df1=project_data_train[['clean_state','labels']][(project_data_train['labels']==1)]
df2=project_data_train[['clean_state','labels']][(project_data_train['labels']==0)]
z =df1.groupby(['clean_state'])['labels'].value_counts() /project_data_train.groupby(['clean_state'])['labels'].count()
group_1=pd.DataFrame(z)
group_1=group_1.reset_index(drop=True)
print(group_1.head(2))
z1 =df2.groupby(['clean_state'])['labels'].value_counts() /project_data_train.groupby(['clean_state'])['labels'].count()
group_0=pd.DataFrame(z1)
group_0=group_0.reset_index(drop=True)
print(group_0.head(2))
x1= df1.groupby(['clean_state'])['labels'].value_counts()
class_1=pd.DataFrame(x1)
class_1=class_1.reset_index(drop=True)
print ( class_1.head(2))
x0= df2.groupby(['clean_state'])['labels'].value_counts()
class_0=pd.DataFrame(x0)
class_0=class_0.reset_index(drop=True)
print ( class_0.head(2))
Response_Table = pd.concat([categories, class_0, class_1],axis=1)
Response_Table = df_column_uniquify(Response_Table)
Response_Table.rename(columns={'labels':'Class=0','labels_1':'Class=1'},inplace=True)
print("Response Table for State")
Response_Table
category_1 = pd.concat([categories,group_0,group_1],axis=1).reset_index()
category_1.head(2)
category_1.drop(['level_0','index'],axis=1,inplace=True)
print("Response Table For Categories")
category_1.head(2)
category_1 = df_column_uniquify(category_1)
category_1.head(2)
category_1.rename(columns={'labels':'State_0','labels_1':'State_1'},inplace=True)
category_1.head(2)
category_1["State_0"].fillna( method ='ffill', inplace = True)
category_1["State_1"].fillna( method ='ffill', inplace = True)
project_data_train = pd.merge(project_data_train, category_1, on='clean_state', how='left')
project_data_train.drop(['clean_state'],axis=1, inplace=True)
Cat_0 = list(project_data_train['State_0'].values)
Category_Class_0 = []
for i in Cat_0:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')  # str(NaN) is 'nan'; map missing rates to 0
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Category_Class_0.append(temp.strip())
project_data_train['State_Class_0'] = Category_Class_0
project_data_train.drop(['State_0'], axis=1, inplace=True)
Cat_1 = list(project_data_train['State_1'].values)
Category_Class_1 = []
for i in Cat_1:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Category_Class_1.append(temp.strip())
project_data_train['State_Class_1'] = Category_Class_1
project_data_train.drop(['State_1'], axis=1, inplace=True)
state = list(project_data_test['school_state'].values)
# same cleaning as for the train states above
state_list_test = []
for i in state:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    state_list_test.append(temp.strip())
project_data_test['clean_state'] = state_list_test
project_data_test.drop(['school_state'], axis=1, inplace=True)
unique_list_test = []
for x in state_list_test:
if x not in unique_list_test:
unique_list_test.append(x)
#https://stackoverflow.com/questions/41125909/python-find-elements-in-one-list-that-are-not-in-the-other
difference=list(set(unique_list_test).difference(unique_list))
print(difference)
project_data_test = pd.merge(project_data_test, category_1, on='clean_state', how='left')
project_data_test.drop(['clean_state'],axis=1, inplace=True)
State_0 = list(project_data_test['State_0'].values)
State_Class_0 = []
for i in State_0:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')  # map missing rates to 0
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    State_Class_0.append(temp.strip())
project_data_test['State_Class_0'] = State_Class_0
project_data_test.drop(['State_0'], axis=1, inplace=True)
State_1 = list(project_data_test['State_1'].values)
State_Class_1 = []
for i in State_1:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    State_Class_1.append(temp.strip())
project_data_test['State_Class_1'] = State_Class_1
project_data_test.drop(['State_1'], axis=1, inplace=True)
grade = list(project_data_train['project_grade_category'].values)
# same cleaning as for the categories above
grade_list = []
for i in grade:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '')  # drop missing values
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    grade_list.append(temp.strip())
project_data_train['clean_grade'] = grade_list
project_data_train.drop(['project_grade_category'], axis=1, inplace=True)
unique_list = []
for x in grade_list:
if x not in unique_list:
unique_list.append(x)
categories=pd.DataFrame({'clean_grade': unique_list})
categories=categories.sort_values(['clean_grade'], ascending=True).reset_index()
df1=project_data_train[['clean_grade','labels']][(project_data_train['labels']==1)]
print(df1.head(2))
df2=project_data_train[['clean_grade','labels']][(project_data_train['labels']==0)]
z =df1.groupby(['clean_grade'])['labels'].value_counts() /project_data_train.groupby(['clean_grade'])['labels'].count()
group_1=pd.DataFrame(z)
group_1=group_1.reset_index(drop=True)
print(group_1.head(2))
z1 =df2.groupby(['clean_grade'])['labels'].value_counts() /project_data_train.groupby(['clean_grade'])['labels'].count()
group_0=pd.DataFrame(z1)
group_0=group_0.reset_index(drop=True)
print(group_0.head(2))
x1= df1.groupby(['clean_grade'])['labels'].value_counts()
class_1=pd.DataFrame(x1)
class_1=class_1.reset_index(drop=True)
print ( class_1.head(2))
x0= df2.groupby(['clean_grade'])['labels'].value_counts()
class_0=pd.DataFrame(x0)
class_0=class_0.reset_index(drop=True)
print ( class_0.head(2))
Response_Table = pd.concat([categories, class_0, class_1],axis=1)
Response_Table = df_column_uniquify(Response_Table)
Response_Table.rename(columns={'labels':'Class=0','labels_1':'Class=1'},inplace=True)
Response_Table
category_1 = pd.concat([categories,group_0,group_1],axis=1).reset_index()
category_1.head(2)
category_1.drop(['level_0','index'],axis=1,inplace=True)
print("Response Table For Categories")
category_1.head(2)
category_1 = df_column_uniquify(category_1)
category_1.head(2)
category_1.rename(columns={'labels':'Grade_0','labels_1':'Grade_1'},inplace=True)
category_1.head(2)
category_1["Grade_0"].fillna( method ='ffill', inplace = True)
category_1["Grade_1"].fillna( method ='ffill', inplace = True)
project_data_train = pd.merge(project_data_train, category_1, on='clean_grade', how='left')
project_data_train.drop(['clean_grade'],axis=1, inplace=True)
grade = list(project_data_test['project_grade_category'].values)
# same cleaning as for the train grades above
grade_list_test = []
for i in grade:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '')  # use 'nan' (not 'NaN'), matching the train pipeline
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    grade_list_test.append(temp.strip())
project_data_test['clean_grade'] = grade_list_test
project_data_test.drop(['project_grade_category'], axis=1, inplace=True)
unique_list_test = []
for x in grade_list_test:
    if x not in unique_list_test:
        unique_list_test.append(x)
#https://stackoverflow.com/questions/41125909/python-find-elements-in-one-list-that-are-not-in-the-other
difference=list(set(unique_list_test).difference(unique_list))
print(difference)
project_data_test = pd.merge(project_data_test, category_1, on='clean_grade', how='left')
project_data_test.drop(['clean_grade'],axis=1, inplace=True)
Grade_0 = list(project_data_test['Grade_0'].values)
Grade_Class_0 = []
for i in Grade_0:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')  # map missing rates to 0
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Grade_Class_0.append(temp.strip())
project_data_test['Grade_Class_0'] = Grade_Class_0
project_data_test.drop(['Grade_0'], axis=1, inplace=True)
Grade_1 = list(project_data_test['Grade_1'].values)
Grade_Class_1 = []
for i in Grade_1:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Grade_Class_1.append(temp.strip())
project_data_test['Grade_Class_1'] = Grade_Class_1
project_data_test.drop(['Grade_1'], axis=1, inplace=True)
prefix = list(project_data_train['teacher_prefix'].values)
# same cleaning as for the categories above
prefix_list = []
for i in prefix:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '')  # drop missing prefixes
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    prefix_list.append(temp.strip())
project_data_train['clean_prefix'] = prefix_list
project_data_train.drop(['teacher_prefix'], axis=1, inplace=True)
project_data_train.head(2)
unique_list = []
for x in prefix_list:
if x not in unique_list:
unique_list.append(x)
categories=pd.DataFrame({'clean_prefix': unique_list})
categories=categories.sort_values(['clean_prefix'], ascending=True).reset_index()
print(categories.head(2))
df1=project_data_train[['clean_prefix','labels']][(project_data_train['labels']==1)]
df2=project_data_train[['clean_prefix','labels']][(project_data_train['labels']==0)]
z =df1.groupby(['clean_prefix'])['labels'].value_counts() /project_data_train.groupby(['clean_prefix'])['labels'].count()
group_1=pd.DataFrame(z)
group_1=group_1.reset_index(drop=True)
z1 =df2.groupby(['clean_prefix'])['labels'].value_counts() /project_data_train.groupby(['clean_prefix'])['labels'].count()
group_0=pd.DataFrame(z1)
group_0=group_0.reset_index(drop=True)
x1= df1.groupby(['clean_prefix'])['labels'].value_counts()
class_1=pd.DataFrame(x1)
class_1=class_1.reset_index(drop=True)
x0= df2.groupby(['clean_prefix'])['labels'].value_counts()
class_0=pd.DataFrame(x0)
class_0=class_0.reset_index(drop=True)
Response_Table = pd.concat([categories, class_0, class_1],axis=1)
Response_Table = df_column_uniquify(Response_Table)
Response_Table.rename(columns={'labels':'Class=0','labels_1':'Class=1'},inplace=True)
Response_Table
category_1 = pd.concat([categories,group_0,group_1],axis=1).reset_index()
category_1.drop(['level_0','index'],axis=1,inplace=True)
category_1 = df_column_uniquify(category_1)
category_1.rename(columns={'labels':'Prefix_0','labels_1':'Prefix_1'},inplace=True)
category_1["Prefix_0"].fillna( method ='ffill', inplace = True)
category_1["Prefix_1"].fillna( method ='ffill', inplace = True)
project_data_train = pd.merge(project_data_train, category_1, on='clean_prefix', how='left')
project_data_train.drop(['clean_prefix'],axis=1, inplace=True)
#project_data_train.head(1)
Cat_0 = list(project_data_train['Prefix_0'].values)
Category_Class_0 = []
for i in Cat_0:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')  # map missing rates to 0
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Category_Class_0.append(temp.strip())
project_data_train['Prefix_Class_0'] = Category_Class_0
project_data_train.drop(['Prefix_0'], axis=1, inplace=True)
Cat_1 = list(project_data_train['Prefix_1'].values)
Category_Class_1 = []
for i in Cat_1:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Category_Class_1.append(temp.strip())
project_data_train['Prefix_Class_1'] = Category_Class_1
project_data_train.drop(['Prefix_1'], axis=1, inplace=True)
prefix = list(project_data_test['teacher_prefix'].values)
# same cleaning as for the train prefixes above
prefix_list_test = []
for i in prefix:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    prefix_list_test.append(temp.strip())
project_data_test['clean_prefix'] = prefix_list_test
project_data_test.drop(['teacher_prefix'], axis=1, inplace=True)
unique_list_test = []
for x in prefix_list_test:
if x not in unique_list_test:
unique_list_test.append(x)
#https://stackoverflow.com/questions/41125909/python-find-elements-in-one-list-that-are-not-in-the-other
difference=list(set(unique_list_test).difference(unique_list))
print(difference)
# give the (empty/missing) prefix value unseen in training a neutral 0.5/0.5 rate
df1 = pd.DataFrame([['', 0.5, 0.5]], columns=['clean_prefix', 'Prefix_0', 'Prefix_1'])
category_1=category_1.append(df1, ignore_index = True)
project_data_test = pd.merge(project_data_test, category_1, on='clean_prefix', how='left')
project_data_test.drop(['clean_prefix'],axis=1, inplace=True)
Prefix_0 = list(project_data_test['Prefix_0'].values)
Prefix_Class_0 = []
for i in Prefix_0:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')  # map missing rates to 0
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Prefix_Class_0.append(temp.strip())
project_data_test['Prefix_Class_0'] = Prefix_Class_0
project_data_test.drop(['Prefix_0'], axis=1, inplace=True)
Prefix_1 = list(project_data_test['Prefix_1'].values)
Prefix_Class_1 = []
for i in Prefix_1:
    temp = ""
    for j in str(i).split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        j = j.replace("nan", '0')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    Prefix_Class_1.append(temp.strip())
project_data_test['Prefix_Class_1'] = Prefix_Class_1
project_data_test.drop(['Prefix_1'], axis=1, inplace=True)
# We are considering only the words which appeared in at least 10 documents(rows or projects).
vectorizer6 = CountVectorizer(min_df=10, lowercase=False, binary=True)
text_bow = vectorizer6.fit_transform(preprocessed_essays)
print("Shape of matrix after one hot encodig ",text_bow.shape)
#print(text_bow)
vectorizer7=CountVectorizer(lowercase=False, binary=True, min_df=0)
title_bow = vectorizer7.fit_transform(preprocessed_title)
print("Shape of matrix after one hot encoding ",title_bow.shape)
# We are considering only the words which appeared in at least 10 documents(rows or projects).
#vectorizer = CountVectorizer(min_df=10, ngram_range=(2,2), lowercase=False, binary=True, max_features=5000, )
text_bow_test = vectorizer6.transform(preprocessed_essays_test)
print("Shape of matrix after one hot encodig ",text_bow_test.shape)
# We are considering only the words which appeared in at least 10 documents(rows or titles).
#vectorizer = CountVectorizer(vocabulary=list(sorted_title_dict.keys()), lowercase=False, binary=True, min_df=0)
title_bow_test = vectorizer7.transform(preprocessed_title_test)
print("Shape of matrix after one hot encoding ",title_bow_test.shape)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer8 = TfidfVectorizer(min_df=10,lowercase=False, binary=True, max_features=5000)
text_tfidf = vectorizer8.fit_transform(preprocessed_essays)
print("Shape of matrix after one hot encodig ",text_tfidf.shape)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer9 = TfidfVectorizer(min_df=0, lowercase=False, binary=True, max_features=5000)
title_tfidf = vectorizer9.fit_transform(preprocessed_title)
print("Shape of matrix after TF-IDF vectorization ", title_tfidf.shape)
# use the TF-IDF vectorizer fitted on the training essays; do not refit on the test data
text_tfidf_test = vectorizer8.transform(preprocessed_essays_test)
print("Shape of matrix after TF-IDF vectorization ", text_tfidf_test.shape)
# use the TF-IDF vectorizer fitted on the training titles; do not refit on the test data
title_tfidf_test = vectorizer9.transform(preprocessed_title_test)
print("Shape of matrix after TF-IDF vectorization ", title_tfidf_test.shape)
# Reading glove vectors in python: https://stackoverflow.com/a/38230349/4084039
def loadGloveModel(gloveFile):
    print("Loading Glove Model")
    f = open(gloveFile, 'r', encoding="utf8")
    model = {}
    for line in tqdm(f):
        splitLine = line.split()
        word = splitLine[0]
        embedding = np.array([float(val) for val in splitLine[1:]])
        model[word] = embedding
    print("Done.", len(model), " words loaded!")
    return model
model = loadGloveModel('glove.42B.300d.txt')
words = []
for i in preprocessed_essays:
    words.extend(i.split(' '))
for i in preprocessed_title:
    words.extend(i.split(' '))
print("all the words in the corpus", len(words))
words = set(words)
print("the unique words in the corpus", len(words))
inter_words = set(model.keys()).intersection(words)
print("The number of words that are present in both glove vectors and our corpus",
      len(inter_words), "(", np.round(len(inter_words)/len(words)*100, 3), "%)")
words_corpus = {}
words_glove = set(model.keys())
for i in words:
    if i in words_glove:
        words_corpus[i] = model[i]
print("word2vec length", len(words_corpus))
# storing variables in pickle files: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
with open('glove_vectors', 'wb') as f:
    pickle.dump(words_corpus, f)
# loading variables from pickle files: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
model = pickle.load(f)
glove_words = set(model.keys())
# average Word2Vec
# compute the average word2vec for each essay
avg_w2v_vectors = []  # the avg-w2v vector for each essay is stored in this list
for sentence in tqdm(preprocessed_essays):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional
    cnt_words = 0  # number of words with a valid vector in the essay
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors.append(vector)
print(len(avg_w2v_vectors))
print(len(avg_w2v_vectors[0]))
# average Word2Vec
# compute the average word2vec for each title
avg_w2v_vectors_tittle = []  # the avg-w2v vector for each title is stored in this list
for sentence in tqdm(preprocessed_title):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional
    cnt_words = 0  # number of words with a valid vector in the title
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_tittle.append(vector)
print(len(avg_w2v_vectors_tittle))
print(len(avg_w2v_vectors_tittle[0]))
# average Word2Vec
# compute the average word2vec for each essay in the test data
avg_w2v_vectors_test = []  # the avg-w2v vector for each essay is stored in this list
for sentence in tqdm(preprocessed_essays_test):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional
    cnt_words = 0  # number of words with a valid vector in the essay
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_test.append(vector)
print(len(avg_w2v_vectors_test))
print(len(avg_w2v_vectors_test[0]))
# average Word2Vec
# compute the average word2vec for each title in the test data
avg_w2v_vectors_tittle_test = []  # the avg-w2v vector for each title is stored in this list
for sentence in tqdm(preprocessed_title_test):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional
    cnt_words = 0  # number of words with a valid vector in the title
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_tittle_test.append(vector)
print(len(avg_w2v_vectors_tittle_test))
print(len(avg_w2v_vectors_tittle_test[0]))
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
tfidf_model_essays = TfidfVectorizer()
tfidf_model_essays.fit_transform(preprocessed_essays)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model_essays.get_feature_names(), list(tfidf_model_essays.idf_)))
tfidf_words = set(tfidf_model_essays.get_feature_names())
# TF-IDF weighted Word2Vec
# compute the tf-idf weighted word2vec for each essay
tfidf_w2v_vectors = []  # the tfidf-w2v vector for each essay is stored in this list
for sentence in tqdm(preprocessed_essays):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional
    tf_idf_weight = 0  # sum of the tf-idf weights of words with a valid vector
    words_in_sentence = sentence.split()
    for word in words_in_sentence:
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]  # the GloVe vector for this word
            # tf-idf = idf (dictionary[word]) * tf (word count / total words)
            tf_idf = dictionary[word] * (words_in_sentence.count(word) / len(words_in_sentence))
            vector += (vec * tf_idf)  # accumulate the tf-idf weighted vectors
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors.append(vector)
print(len(tfidf_w2v_vectors))
print(len(tfidf_w2v_vectors[0]))
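In equation form, the vector computed above for an essay $s$ is the tf-idf weighted average of the GloVe vectors $g(w)$, taken over words that have both a GloVe vector and an idf value:

$$v(s) = \frac{\sum_{w \in s} \mathrm{tfidf}(w, s)\, g(w)}{\sum_{w \in s} \mathrm{tfidf}(w, s)}, \qquad \mathrm{tfidf}(w, s) = \mathrm{idf}(w) \cdot \frac{\mathrm{count}(w, s)}{|s|}$$

The same weighting is applied to the titles and to the test data below.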
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
tfidf_model_title = TfidfVectorizer()
tfidf_model_title.fit_transform(preprocessed_title)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model_title.get_feature_names(), list(tfidf_model_title.idf_)))
tfidf_words = set(tfidf_model_title.get_feature_names())
# TF-IDF weighted Word2Vec
# compute the tf-idf weighted word2vec for each title
tfidf_w2v_vectors_Title = []  # the tfidf-w2v vector for each title is stored in this list
for sentence in tqdm(preprocessed_title):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional
    tf_idf_weight = 0  # sum of the tf-idf weights of words with a valid vector
    words_in_sentence = sentence.split()
    for word in words_in_sentence:
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]
            # tf-idf = idf (dictionary[word]) * tf (word count / total words)
            tf_idf = dictionary[word] * (words_in_sentence.count(word) / len(words_in_sentence))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_Title.append(vector)
print(len(tfidf_w2v_vectors_Title))
print(len(tfidf_w2v_vectors_Title[0]))
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
#vectorizer = TfidfVectorizer(vocabulary=sorted_essays_dict.keys(), lowercase=False, binary=True, min_df=10)
tfidf_model_essays.fit(preprocessed_essays_test)
tfidf_model_essays.transform(preprocessed_essays_test)
# we are converting a dictionary with word as a key, and the idf as a value
#dictionary = dict(zip(tfidf_model_essays.get_feature_names(), list(tfidf_model_essays.idf_)))
#tfidf_words = set(tfidf_model_essays.get_feature_names())
# TF-IDF weighted Word2Vec
# compute the tf-idf weighted word2vec for each essay in the test data
tfidf_w2v_vectors_test = []  # the tfidf-w2v vector for each essay is stored in this list
for sentence in tqdm(preprocessed_essays_test):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional
    tf_idf_weight = 0  # sum of the tf-idf weights of words with a valid vector
    words_in_sentence = sentence.split()
    for word in words_in_sentence:
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]
            # tf-idf = idf (dictionary[word]) * tf (word count / total words)
            tf_idf = dictionary[word] * (words_in_sentence.count(word) / len(words_in_sentence))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_test.append(vector)
print(len(tfidf_w2v_vectors_test))
print(len(tfidf_w2v_vectors_test[0]))
# Reuse the title TF-IDF model fitted on the train data; refitting it on the
# test data would leak test-set idf statistics.
dictionary = dict(zip(tfidf_model_title.get_feature_names(), list(tfidf_model_title.idf_)))
tfidf_words = set(tfidf_model_title.get_feature_names())
# TF-IDF weighted Word2Vec
# compute the tf-idf weighted word2vec for each test title.
tfidf_w2v_vectors_Title_test = [] # the tf-idf weighted w2v for each test title is stored in this list
for sentence in tqdm(preprocessed_title_test): # for each title
    vector = np.zeros(300) # start from a zero vector of length 300 (the word-vector dimension)
    tf_idf_weight = 0 # running sum of the tf-idf weights of the valid words in this title
    for word in sentence.split(): # for each word in the title
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word] # the word vector for this word
            # tf-idf = idf (dictionary[word]) * tf (count of the word / number of words in the title)
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split()))
            vector += (vec * tf_idf) # accumulate the tf-idf weighted word vector
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight # normalize by the total tf-idf weight
    tfidf_w2v_vectors_Title_test.append(vector)
print(len(tfidf_w2v_vectors_Title_test))
print(len(tfidf_w2v_vectors_Title_test[0]))
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
project_data_train = pd.merge(project_data_train, price_data, on='id', how='left')
# check this one: https://www.youtube.com/watch?v=0HOqOcln3Z4&t=530s
# standardization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
from sklearn.preprocessing import StandardScaler
# price_standardized = standardScalar.fit(project_data['price'].values)
# would raise the error:
# ValueError: Expected 2D array, got 1D array instead: array=[725.05 213.03 329. ... 399. 287.73 5.5 ].
# Reshape your data using array.reshape(-1, 1)
price_scalar = StandardScaler()
price_scalar.fit_transform(project_data_train['price'].values.reshape(-1,1)) # learn the mean and standard deviation of this data
print(f"Mean : {price_scalar.mean_[0]}, Standard deviation : {np.sqrt(price_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
price_standardized = price_scalar.transform(project_data_train['price'].values.reshape(-1, 1))
price_standardized.shape
quantity_scalar = StandardScaler()
quantity_scalar.fit_transform(project_data_train['quantity'].values.reshape(-1,1)) # learn the mean and standard deviation of this data
print(f"Mean : {quantity_scalar.mean_[0]}, Standard deviation : {np.sqrt(quantity_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
quantity_standardized = quantity_scalar.transform(project_data_train['quantity'].values.reshape(-1, 1))
quantity_standardized
teacher_number_of_previously_posted_projects_scalar = StandardScaler()
teacher_number_of_previously_posted_projects_scalar.fit_transform(project_data_train['teacher_number_of_previously_posted_projects'].values.reshape(-1,1)) # learn the mean and standard deviation of this data
print(f"Mean : {teacher_number_of_previously_posted_projects_scalar.mean_[0]}, Standard deviation : {np.sqrt(teacher_number_of_previously_posted_projects_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
teacher_number_of_previously_posted_projects_standardized = teacher_number_of_previously_posted_projects_scalar.transform(project_data_train['teacher_number_of_previously_posted_projects'].values.reshape(-1, 1))
teacher_number_of_previously_posted_projects_standardized
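Each of these numeric columns goes through the same fit-on-train, transform-on-both pattern, so a helper like this sketch (the function name is illustrative) keeps the two splits symmetric and makes the no-leakage rule explicit:

def standardize_column(train_values, test_values):
    # fit the scaler on the train column only, then transform both splits
    scaler = StandardScaler()
    train_std = scaler.fit_transform(train_values.reshape(-1, 1))
    test_std = scaler.transform(test_values.reshape(-1, 1)) # uses train statistics only
    return train_std, test_std

# e.g.: price_standardized, price_standardized_test = standardize_column(
#           project_data_train['price'].values, project_data_test['price'].values)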
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
project_data_test = pd.merge(project_data_test, price_data, on='id', how='left')
# Reuse the scaler fitted on the train data; fitting (or refitting) on the test
# data would leak test-set statistics into the standardization.
price_standardized_test = price_scalar.transform(project_data_test['price'].values.reshape(-1, 1))
price_standardized_test.shape
quantity_test_standardized = quantity_scalar.transform(project_data_test['quantity'].values.reshape(-1, 1))
quantity_test_standardized.shape
teacher_number_of_previously_posted_projects_standardized_test = teacher_number_of_previously_posted_projects_scalar.transform(project_data_test['teacher_number_of_previously_posted_projects'].values.reshape(-1, 1))
teacher_number_of_previously_posted_projects_standardized_test.shape
project_data_train.columns
# Standardize each response-coded categorical feature. Each scaler is fit on
# its own train column only, so the same (train-fitted) scaler can be reused
# on the test set below.
Category_0 = StandardScaler()
Category_0_train_standardized = Category_0.fit_transform(project_data_train['Category_0'].values.reshape(-1, 1))
Category_1 = StandardScaler()
Category_1_train_standardized = Category_1.fit_transform(project_data_train['Category_1'].values.reshape(-1, 1))
SubCat_0 = StandardScaler()
SubCat_0_train_standardized = SubCat_0.fit_transform(project_data_train['SubCategory_0'].values.reshape(-1, 1))
SubCat_1 = StandardScaler()
SubCat_1_train_standardized = SubCat_1.fit_transform(project_data_train['SubCategory_1'].values.reshape(-1, 1))
State_0 = StandardScaler()
State_0_train_standardized = State_0.fit_transform(project_data_train['State_Class_0'].values.reshape(-1, 1))
State_1 = StandardScaler()
State_1_standardized = State_1.fit_transform(project_data_train['State_Class_1'].values.reshape(-1, 1))
Grade_0 = StandardScaler()
Grade_0_standardized = Grade_0.fit_transform(project_data_train['Grade_0'].values.reshape(-1, 1))
Grade_1 = StandardScaler()
Grade_1_standardized = Grade_1.fit_transform(project_data_train['Grade_1'].values.reshape(-1, 1))
Prefix_0 = StandardScaler()
Prefix_0_standardized = Prefix_0.fit_transform(project_data_train['Prefix_Class_0'].values.reshape(-1, 1))
Prefix_1 = StandardScaler()
Prefix_1_standardized = Prefix_1.fit_transform(project_data_train['Prefix_Class_1'].values.reshape(-1, 1))
# merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039
from scipy.sparse import hstack
# the same hstack function can concatenate sparse matrices and dense arrays
# note: the feature order here must match the test-set stack below exactly
X_train = hstack((Category_0_train_standardized, Category_1_train_standardized,
                  SubCat_0_train_standardized, SubCat_1_train_standardized,
                  State_0_train_standardized, State_1_standardized,
                  Grade_0_standardized, Grade_1_standardized,
                  Prefix_0_standardized, Prefix_1_standardized,
                  teacher_number_of_previously_posted_projects_standardized,
                  price_standardized, text_bow, title_bow)).tocsr() # https://www.kaggle.com/c/quora-question-pairs/discussion/33491
X_train.shape
project_data_test.columns
Category_1_test_standardized = Category_1.transform(project_data_test['Category_1'].values.reshape(-1, 1))
Category_0_test_standardized = Category_0.transform(project_data_test['Category_0'].values.reshape(-1, 1))
SubCat_0_test_standardized = SubCat_0.transform(project_data_test['SubCategory_0'].values.reshape(-1, 1))
SubCat_1_test_standardized = SubCat_1.transform(project_data_test['SubCategory_1'].values.reshape(-1, 1))
State_0_test_standardized = State_0.transform(project_data_test['State_Class_0'].values.reshape(-1, 1))
State_1_test_standardized = State_1.transform(project_data_test['State_Class_1'].values.reshape(-1, 1))
Grade_0_test_standardized = Grade_0.transform(project_data_test['Grade_Class_0'].values.reshape(-1, 1))
Grade_1_test_standardized = Grade_1.transform(project_data_test['Grade_Class_1'].values.reshape(-1, 1))
Prefix_0_test_standardized = Prefix_0.transform(project_data_test['Prefix_Class_0'].values.reshape(-1, 1))
Prefix_1_test_standardized = Prefix_1.transform(project_data_test['Prefix_Class_1'].values.reshape(-1, 1))
X_test = hstack((Category_0_test_standardized, Category_1_test_standardized,
                 SubCat_0_test_standardized, SubCat_1_test_standardized,
                 State_0_test_standardized, State_1_test_standardized,
                 Grade_0_test_standardized, Grade_1_test_standardized,
                 Prefix_0_test_standardized, Prefix_1_test_standardized,
                 teacher_number_of_previously_posted_projects_standardized_test,
                 price_standardized_test, text_bow_test, title_bow_test)).tocsr() # same feature order as X_train
X_test.shape
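Since the feature blocks are stacked by hand, it is worth asserting that the train and test matrices line up before any model is fit; a mismatch in column count (or order) would silently corrupt the predictions. A quick check:

# the stacks above must produce the same number of columns, in the same order
assert X_train.shape[1] == X_test.shape[1], (X_train.shape, X_test.shape)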
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# sklearn.cross_validation has been removed; these utilities live in sklearn.model_selection
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from collections import Counter
from math import log
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.model_selection import GridSearchCV
C = RandomForestClassifier()
n_estimators=[10,50,100,500,1000]
max_depth=[1, 5, 10, 50, 100, 500, 1000]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
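The fitted GridSearchCV object already exposes the best setting directly, which is often easier to read than the 3-D plot below:

print("Best parameters :", clf.best_params_)
print("Best CV AUC     : %.4f" % clf.best_score_)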
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
import numpy as np
# https://plot.ly/python/3d-axes/
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
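Since the search runs over a 2-D grid, a flat heatmap of the mean CV AUC is a readable alternative to the 3-D scatter. A sketch, assuming the fitted clf from above:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
results = pd.DataFrame(clf.cv_results_)
# pivot the mean CV AUC into a (max_depth x n_estimators) grid
heat = results.pivot(index='param_max_depth', columns='param_n_estimators', values='mean_test_score')
sns.heatmap(heat, annot=True, fmt=".3f")
plt.title("Mean CV AUC over the hyperparameter grid")
plt.show()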
def model_predict(clf, data):
    # roc_auc_score(y_true, y_score): the 2nd parameter should be probability
    # estimates of the positive class, not the predicted labels
    return list(clf.predict_proba(data)[:, 1])
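For very large feature matrices, a single predict_proba call can be memory-hungry; predicting in slices keeps the peak bounded. A sketch (the batch size is an arbitrary choice):

def model_predict_batched(clf, data, batch_size=10000):
    # slice-wise predict_proba; works for both sparse and dense matrices
    preds = []
    for i in range(0, data.shape[0], batch_size):
        preds.extend(clf.predict_proba(data[i:i + batch_size])[:, 1])
    return preds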
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
#from sklearn.calibration import CalibratedClassifierCV
neigh = RandomForestClassifier(n_estimators=100,max_depth=5,class_weight='balanced')
neigh.fit(X_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X_train)
y_test_pred = model_predict(neigh, X_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
# we write our own predict function with a tuned threshold:
# pick the threshold that maximizes tpr*(1-fpr), i.e. a high tpr at a low fpr
def predict(proba, thresholds, fpr, tpr):
    t = thresholds[np.argmax(tpr*(1-fpr))]
    # tpr*(1-fpr) is maximal when fpr is very low and tpr is very high
    print("the maximum value of tpr*(1-fpr)", max(tpr*(1-fpr)), "for threshold", np.round(t, 3))
    predictions = []
    for i in proba:
        if i >= t:
            predictions.append(1)
        else:
            predictions.append(0)
    return predictions
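The same thresholding can also be written without the Python loop; a vectorized sketch equivalent to the function above:

def predict_vectorized(proba, thresholds, fpr, tpr):
    t = thresholds[np.argmax(tpr*(1-fpr))] # same threshold choice as above
    return (np.asarray(proba) >= t).astype(int).tolist()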
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier
C = GradientBoostingClassifier()
n_estimators=[10,50,100,500,1000]
max_depth=[1, 5, 10, 50, 100, 500, 1000]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
import numpy as np
# https://plot.ly/python/3d-axes/
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
from sklearn.ensemble import GradientBoostingClassifier
neigh = GradientBoostingClassifier(n_estimators=50,max_depth=1)
neigh.fit(X_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X_train)
y_test_pred = model_predict(neigh, X_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
# merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039
from scipy.sparse import hstack
# the same hstack function can concatenate sparse matrices and dense arrays
# note: the feature order here must match the test-set stack below exactly
X1_train = hstack((Category_0_train_standardized, Category_1_train_standardized,
                   SubCat_0_train_standardized, SubCat_1_train_standardized,
                   State_0_train_standardized, State_1_standardized,
                   Grade_0_standardized, Grade_1_standardized,
                   Prefix_0_standardized, Prefix_1_standardized,
                   teacher_number_of_previously_posted_projects_standardized,
                   price_standardized, text_tfidf, tittle_tfidf)).tocsr() # https://www.kaggle.com/c/quora-question-pairs/discussion/33491
X1_train.shape
X1_test = hstack((Category_0_test_standardized, Category_1_test_standardized,
                  SubCat_0_test_standardized, SubCat_1_test_standardized,
                  State_0_test_standardized, State_1_test_standardized,
                  Grade_0_test_standardized, Grade_1_test_standardized,
                  Prefix_0_test_standardized, Prefix_1_test_standardized,
                  teacher_number_of_previously_posted_projects_standardized_test,
                  price_standardized_test, text_tfidf_test, title_tfidf_test)).tocsr() # same feature order as X1_train
X1_test.shape
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.model_selection import GridSearchCV
C = RandomForestClassifier()
n_estimators=[10,50,100,500,1000]
max_depth=[1, 5, 10, 50, 100, 500, 1000]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X1_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
import numpy as np
# https://plot.ly/python/3d-axes/
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
#from sklearn.calibration import CalibratedClassifierCV
neigh = RandomForestClassifier(n_estimators=100,max_depth=7,class_weight='balanced')
neigh.fit(X1_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X1_train)
y_test_pred = model_predict(neigh, X1_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier
C = GradientBoostingClassifier()
n_estimators=[10,50,100,500,1000]
max_depth=[1, 5, 10, 50, 100, 500, 1000]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X1_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
# https://plot.ly/python/3d-axes/
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
#from sklearn.calibration import CalibratedClassifierCV
neigh = GradientBoostingClassifier(n_estimators=50,max_depth=2)
neigh.fit(X1_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X1_train)
y_test_pred = model_predict(neigh, X1_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
# the avg-w2v features are dense, so np.hstack stacks everything into one dense matrix
# note: the feature order here must match the test-set stack below exactly
X2_train = np.hstack((Category_0_train_standardized, Category_1_train_standardized,
                      SubCat_0_train_standardized, SubCat_1_train_standardized,
                      State_0_train_standardized, State_1_standardized,
                      Grade_0_standardized, Grade_1_standardized,
                      Prefix_0_standardized, Prefix_1_standardized,
                      teacher_number_of_previously_posted_projects_standardized,
                      price_standardized, avg_w2v_vectors, avg_w2v_vectors_tittle))
X2_train.shape
X2_test = np.hstack((Category_0_test_standardized, Category_1_test_standardized,
                     SubCat_0_test_standardized, SubCat_1_test_standardized,
                     State_0_test_standardized, State_1_test_standardized,
                     Grade_0_test_standardized, Grade_1_test_standardized,
                     Prefix_0_test_standardized, Prefix_1_test_standardized,
                     teacher_number_of_previously_posted_projects_standardized_test,
                     price_standardized_test, avg_w2v_vectors_test, avg_w2v_vectors_tittle_test)) # same feature order as X2_train
X2_test.shape
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.model_selection import GridSearchCV
C = RandomForestClassifier()
n_estimators=[10,50,100,200,500]
max_depth=[1, 5, 10, 50, 100, 500, 1000]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X2_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
import numpy as np
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
#from sklearn.calibration import CalibratedClassifierCV
neigh = RandomForestClassifier(n_estimators=50,max_depth=5,class_weight='balanced')
neigh.fit(X2_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X2_train)
y_test_pred = model_predict(neigh, X2_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.ensemble import GradientBoostingClassifier
C = GradientBoostingClassifier()
n_estimators=[10,50,100,200,500]
max_depth=[1, 5, 10, 50, 100, 500]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X2_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
import numpy as np
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
#from sklearn.calibration import CalibratedClassifierCV
neigh = GradientBoostingClassifier(n_estimators=50,max_depth=2)
neigh.fit(X2_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X2_train)
y_test_pred = model_predict(neigh, X2_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
# the tf-idf weighted w2v features are dense, so np.hstack stacks everything into one dense matrix
# note: the feature order here must match the test-set stack below exactly
X3_train = np.hstack((Category_0_train_standardized, Category_1_train_standardized,
                      SubCat_0_train_standardized, SubCat_1_train_standardized,
                      State_0_train_standardized, State_1_standardized,
                      Grade_0_standardized, Grade_1_standardized,
                      Prefix_0_standardized, Prefix_1_standardized,
                      teacher_number_of_previously_posted_projects_standardized,
                      price_standardized, tfidf_w2v_vectors, tfidf_w2v_vectors_Title))
X3_train.shape
X3_test = np.hstack((Category_0_test_standardized, Category_1_test_standardized,
                     SubCat_0_test_standardized, SubCat_1_test_standardized,
                     State_0_test_standardized, State_1_test_standardized,
                     Grade_0_test_standardized, Grade_1_test_standardized,
                     Prefix_0_test_standardized, Prefix_1_test_standardized,
                     teacher_number_of_previously_posted_projects_standardized_test,
                     price_standardized_test, tfidf_w2v_vectors_test, tfidf_w2v_vectors_Title_test)) # same feature order as X3_train
X3_test.shape
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.model_selection import GridSearchCV
C = RandomForestClassifier()
n_estimators=[10,50,100,500,1000]
max_depth=[1, 5, 10, 50, 100, 500, 1000]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X3_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
import numpy as np
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
#from sklearn.calibration import CalibratedClassifierCV
neigh = RandomForestClassifier(n_estimators=50,max_depth=5,class_weight='balanced')
neigh.fit(X3_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X3_train)
y_test_pred = model_predict(neigh, X3_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
from sklearn.ensemble import GradientBoostingClassifier
C = GradientBoostingClassifier()
n_estimators=[10,50,100,200,500]
max_depth=[1, 5, 10, 50, 100, 500]
import math
log_max_depth = [math.log10(x) for x in max_depth]
log_n_estimators=[math.log10(x) for x in n_estimators]
print("Printing parameter Data and Corresponding Log value for Max Depth")
data={'Parameter value':max_depth,'Corresponding Log Value':log_max_depth}
param=pd.DataFrame(data)
print("="*100)
print(param)
print("Printing parameter Data and Corresponding Log value for Estimators")
data={'Parameter value':n_estimators,'Corresponding Log Value':log_n_estimators}
param=pd.DataFrame(data)
print("="*100)
print(param)
parameters = {'n_estimators':n_estimators, 'max_depth':max_depth}
clf = GridSearchCV(C, parameters, cv=3, scoring='roc_auc', n_jobs=-1, return_train_score=True) # train scores are read from cv_results_ below
clf.fit(X3_train, labels_train)
train_auc= clf.cv_results_['mean_train_score']
train_auc_std= clf.cv_results_['std_train_score']
cv_auc = clf.cv_results_['mean_test_score']
cv_auc_std= clf.cv_results_['std_test_score']
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
import numpy as np
trace1 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=train_auc, name = 'train')
trace2 = go.Scatter3d(x=log_n_estimators,y=log_max_depth,z=cv_auc, name = 'Cross validation')
data = [trace1, trace2]
layout = go.Layout(scene = dict(
xaxis = dict(title='n_estimators'),
yaxis = dict(title='max_depth'),
zaxis = dict(title='AUC'),))
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig, filename='3d-scatter-colorscale')
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
#from sklearn.calibration import CalibratedClassifierCV
neigh = GradientBoostingClassifier(n_estimators=50,max_depth=2)
neigh.fit(X3_train, labels_train)
# roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
# not the predicted outputs
y_train_pred = model_predict(neigh, X3_train)
y_test_pred = model_predict(neigh, X3_test)
train_fpr, train_tpr, tr_thresholds = roc_curve(labels_train, y_train_pred)
test_fpr, test_tpr, te_thresholds = roc_curve(labels_test, y_test_pred)
plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("K: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
print("="*100)
from sklearn.metrics import confusion_matrix
print("Train confusion matrix")
cm=confusion_matrix(labels_train, predict(y_train_pred, tr_thresholds, train_fpr, train_fpr))
sns.heatmap(cm, annot=True, fmt="d" )
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Train")
print("Test confusion matrix")
cm1=confusion_matrix(labels_test, predict(y_test_pred, tr_thresholds, test_fpr, test_fpr))
sns.heatmap(cm1, annot=True,fmt="d")
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title("Confusion Matrix for Test")
from prettytable import PrettyTable
print("Pretty Table for Random Forest")
print("--"*50)
x = PrettyTable()
x.field_names = ["Vectorizer", "Model","n_estimators", "Max_depth", "Train AUC", "Test AUC"]
x.add_row(["BOW", "Brute(Decision Tree)","100", 5 , 75, 59])
x.add_row(["TFIDF", "Brute(Decision Tree)","100", 7, 74, 60])
x.add_row(["AVG W2V", "Brute(Decision Tree)","50", 5, 75, 64 ])
x.add_row(["TFIDF W2V", "Brute(Decision Tree)","50", 5, 68, 63 ])
print(x)
from prettytable import PrettyTable
print("Pretty Table for GBDT")
print("--"*50)
x = PrettyTable()
x.field_names = ["Vectorizer", "Model","n_estimators", "Max_depth", "Train AUC", "Test AUC"]
x.add_row(["BOW", "Brute(Decision Tree)","50", 1 , 66, 61])
x.add_row(["TFIDF", "Brute(Decision Tree)","50", 2, 70, 62])
x.add_row(["AVG W2V", "Brute(Decision Tree)","50", 2, 71, 61 ])
x.add_row(["TFIDF W2V", "Brute(Decision Tree)","50", 2, 72, 63 ])
print(x)